We will load the data as follows:
df <- read.csv("C:\\Users\\eduar\\OneDrive - Instituto Tecnologico y de Estudios Superiores de Monterrey\\Tec Monterrey Adm TI M\\4. Analisis y minería de datos para la toma de desiciones\\A6\\Actividad-6\\processed.switzerland.data", header = F)
Rows: 123
Columns: 14
$ V1 <int> 32, 34, 35, 36, 38, 38, 38, 38, 38, 38, 40, 41, 42, 42, 43, 43, 43…
$ V2 <int> 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ V3 <int> 1, 4, 4, 4, 4, 4, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 4, 4, 3, 3, …
$ V4 <chr> "95", "115", "?", "110", "105", "110", "100", "115", "135", "150",…
$ V5 <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, …
$ V6 <chr> "?", "?", "?", "?", "?", "0", "?", "0", "?", "?", "?", "?", "?", "…
$ V7 <chr> "0", "?", "0", "0", "0", "0", "0", "0", "0", "0", "1", "0", "0", "…
$ V8 <chr> "127", "154", "130", "125", "166", "156", "179", "128", "150", "12…
$ V9 <chr> "0", "0", "1", "1", "0", "0", "0", "1", "0", "1", "0", "0", "1", "…
$ V10 <chr> ".7", ".2", "?", "1", "2.8", "0", "-1.1", "0", "0", "?", "0", "1.6…
$ V11 <chr> "1", "1", "?", "2", "1", "2", "1", "2", "?", "?", "1", "1", "3", "…
$ V12 <chr> "?", "?", "?", "?", "?", "?", "?", "?", "?", "?", "?", "?", "?", "…
$ V13 <chr> "?", "?", "7", "6", "?", "3", "?", "7", "3", "3", "?", "?", "?", "…
$ V14 <int> 1, 1, 3, 1, 2, 1, 0, 1, 2, 1, 2, 2, 1, 2, 3, 4, 2, 0, 1, 1, 1, 3, …
Publication Request:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
This file describes the contents of the heart-disease directory.
This directory contains 4 databases concerning heart disease diagnosis.
All attributes are numeric-valued. The data was collected from the
four following locations:
Cleveland Clinic Foundation (cleveland.data)
Hungarian Institute of Cardiology, Budapest (hungarian.data)
V.A. Medical Center, Long Beach, CA (long-beach-va.data)
University Hospital, Zurich, Switzerland (switzerland.data)
Each database has the same instance format. While the databases have 76
raw attributes, only 14 of them are actually used. Thus I’ve taken the
liberty of making 2 copies of each database: one with all the attributes
and 1 with the 14 attributes actually used in past experiments.
The authors of the databases have requested:
…that any publications resulting from the use of the data include the
names of the principal investigator responsible for the data collection
at each institution. They would be:
Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D.
University Hospital, Zurich, Switzerland: William Steinbrunn, M.D.
University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D.
V.A. Medical Center, Long Beach and Cleveland Clinic Foundation:
Robert Detrano, M.D., Ph.D.
Thanks in advance for abiding by this request.
David Aha
July 22, 1988
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1. Title: Heart Disease Databases
2. Source Information:
-- 1. Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D.
-- 2. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D.
-- 3. University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D.
-- 4. V.A. Medical Center, Long Beach and Cleveland Clinic Foundation:
Robert Detrano, M.D., Ph.D.
Donor: David W. Aha (aha@ics.uci.edu) (714) 856-8779
Date: July, 1988
3. Past Usage:
Sandhu,~S., Guppy,~K., Lee,~S., \& Froelicher,~V. (1989). {\it
International application of a new probability algorithm for the
diagnosis of coronary artery disease.} {\it American Journal of
Cardiology}, {\it 64},304–310.
-- International Probability Analysis
-- Address: Robert Detrano, M.D.
Cardiology 111-C
V.A. Medical Center
5901 E. 7th Street
Long Beach, CA 90028
-- Results in percent accuracy: (for 0.5 probability threshold)
Data Name: CDF CADENZA
-- Hungarian 77 74
Long beach 79 77
Swiss 81 81
-- Approximately a 77% correct classification accuracy with a
logistic-regression-derived discriminant function
--
-- Instance-based prediction of heart-disease presence with the
Cleveland database
-- NTgrowth: 77.0% accuracy
-- C4: 74.8% accuracy
-- Gennari, J.~H., Langley, P, \& Fisher, D. (1989). Models of
incremental concept formation. {\it Artificial Intelligence, 40},
11–61.
-- Results:
-- The CLASSIT conceptual clustering system achieved a 78.9% accuracy
on the Cleveland database.
4. Relevant Information:
This database contains 76 attributes, but all published experiments
refer to using a subset of 14 of them. In particular, the Cleveland
database is the only one that has been used by ML researchers to
this date. The “goal” field refers to the presence of heart disease
in the patient. It is integer valued from 0 (no presence) to 4.
Experiments with the Cleveland database have concentrated on simply
attempting to distinguish presence (values 1,2,3,4) from absence (value
0).
The names and social security numbers of the patients were recently
removed from the database, replaced with dummy values.
One file has been “processed”, that one containing the Cleveland
database. All four unprocessed files also exist in this directory.
5. Number of Instances:
Database: # of instances:
Cleveland: 303
Hungarian: 294
Switzerland: 123
Long Beach VA: 200
6. Number of Attributes: 76 (including the predicted attribute)
7. Attribute Information:
-- Only 14 used
-- 1. #3 (age)
-- 2. #4 (sex)
-- 3. #9 (cp)
-- 4. #10 (trestbps)
-- 5. #12 (chol)
-- 6. #16 (fbs)
-- 7. #19 (restecg)
-- 8. #32 (thalach)
-- 9. #38 (exang)
-- 10. #40 (oldpeak)
-- 11. #41 (slope)
-- 12. #44 (ca)
-- 13. #51 (thal)
-- 14. #58 (num) (the predicted attribute)
-- Complete attribute documentation:
1 id: patient identification number
2 ccf: social security number (I replaced this with a dummy value of 0)
3 age: age in years
4 sex: sex (1 = male; 0 = female)
5 painloc: chest pain location (1 = substernal; 0 = otherwise)
6 painexer (1 = provoked by exertion; 0 = otherwise)
7 relrest (1 = relieved after rest; 0 = otherwise)
8 pncaden (sum of 5, 6, and 7)
9 cp: chest pain type
-- Value 1: typical angina
-- Value 2: atypical angina
-- Value 3: non-anginal pain
-- Value 4: asymptomatic
10 trestbps: resting blood pressure (in mm Hg on admission to the
hospital)
11 htn
12 chol: serum cholestoral in mg/dl
13 smoke: I believe this is 1 = yes; 0 = no (is or is not a smoker)
14 cigs (cigarettes per day)
15 years (number of years as a smoker)
16 fbs: (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
17 dm (1 = history of diabetes; 0 = no such history)
18 famhist: family history of coronary artery disease (1 = yes; 0 = no)
19 restecg: resting electrocardiographic results
-- Value 0: normal
-- Value 1: having ST-T wave abnormality (T wave inversions and/or ST
elevation or depression of > 0.05 mV)
-- Value 2: showing probable or definite left ventricular hypertrophy
by Estes’ criteria
20 ekgmo (month of exercise ECG reading)
21 ekgday(day of exercise ECG reading)
22 ekgyr (year of exercise ECG reading)
23 dig (digitalis used furing exercise ECG: 1 = yes; 0 = no)
24 prop (Beta blocker used during exercise ECG: 1 = yes; 0 = no)
25 nitr (nitrates used during exercise ECG: 1 = yes; 0 = no)
26 pro (calcium channel blocker used during exercise ECG: 1 = yes; 0 = no)
27 diuretic (diuretic used used during exercise ECG: 1 = yes; 0 = no)
28 proto: exercise protocol
1 = Bruce
2 = Kottus
3 = McHenry
4 = fast Balke
5 = Balke
6 = Noughton
7 = bike 150 kpa min/min (Not sure if “kpa min/min” is what was
written!)
8 = bike 125 kpa min/min
9 = bike 100 kpa min/min
10 = bike 75 kpa min/min
11 = bike 50 kpa min/min
12 = arm ergometer
29 thaldur: duration of exercise test in minutes
30 thaltime: time when ST measure depression was noted
31 met: mets achieved
32 thalach: maximum heart rate achieved
33 thalrest: resting heart rate
34 tpeakbps: peak exercise blood pressure (first of 2 parts)
35 tpeakbpd: peak exercise blood pressure (second of 2 parts)
36 dummy
37 trestbpd: resting blood pressure
38 exang: exercise induced angina (1 = yes; 0 = no)
39 xhypo: (1 = yes; 0 = no)
40 oldpeak = ST depression induced by exercise relative to rest
41 slope: the slope of the peak exercise ST segment
-- Value 1: upsloping
-- Value 2: flat
-- Value 3: downsloping
42 rldv5: height at rest
43 rldv5e: height at peak exercise
44 ca: number of major vessels (0-3) colored by flourosopy
45 restckm: irrelevant
46 exerckm: irrelevant
47 restef: rest raidonuclid (sp?) ejection fraction
48 restwm: rest wall (sp?) motion abnormality
0 = none
1 = mild or moderate
2 = moderate or severe
3 = akinesis or dyskmem (sp?)
49 exeref: exercise radinalid (sp?) ejection fraction
50 exerwm: exercise wall (sp?) motion
51 thal: 3 = normal; 6 = fixed defect; 7 = reversable defect
52 thalsev: not used
53 thalpul: not used
54 earlobe: not used
55 cmo: month of cardiac cath (sp?) (perhaps “call”)
56 cday: day of cardiac cath (sp?)
57 cyr: year of cardiac cath (sp?)
58 num: diagnosis of heart disease (angiographic disease status)
-- Value 0: < 50% diameter narrowing
-- Value 1: > 50% diameter narrowing
(in any major vessel: attributes 59 through 68 are vessels)
59 lmt
60 ladprox
61 laddist
62 diag
63 cxmain
64 ramus
65 om1
66 om2
67 rcaprox
68 rcadist
69 lvx1: not used
70 lvx2: not used
71 lvx3: not used
72 lvx4: not used
73 lvf: not used
74 cathef: not used
75 junk: not used
76 name: last name of patient
(I replaced this with the dummy string "name")
9. Missing Attribute Values: Several. Distinguished with value -9.0.
10. Class Distribution:
Database: 0 1 2 3 4 Total
Cleveland: 164 55 36 35 13 303
Hungarian: 188 37 26 28 15 294
Switzerland: 8 48 32 30 5 123
Long Beach VA: 51 56 41 42 10 200
As can be seen, the column names are therefore:
V1: “Age”
V2: “Sex”
V3: “Chest_Pain_Type”
V4: “Resting_Blood_Pressure”
V5: “Serum_Cholesterol”
V6: “Fasting_Blood_Sugar”
V7: “Resting_ECG”
V8: “Max_Heart_Rate_Achieved”
V9: “Exercise_Induced_Angina”
V10: “ST_Depression_Exercise”
V11: “Peak_Exercise_ST_Segment”
V12: “Num_Major_Vessels_Flouro”
V13: “Thalassemia”
V14: “Diagnosis_Heart_Disease”
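Assuming the data frame read above (with `header = FALSE`, so columns come in as V1..V14), the renaming can be sketched like this; the toy data frame here is only a stand-in so the snippet runs on its own:

```r
# Sketch: assign the 14 descriptive names listed above.
# `df` here is a toy stand-in with the same shape as the real data frame
# (the real `df` comes from the read.csv call at the top).
df <- as.data.frame(matrix(0, nrow = 1, ncol = 14))
names(df) <- c("Age", "Sex", "Chest_Pain_Type", "Resting_Blood_Pressure",
               "Serum_Cholesterol", "Fasting_Blood_Sugar", "Resting_ECG",
               "Max_Heart_Rate_Achieved", "Exercise_Induced_Angina",
               "ST_Depression_Exercise", "Peak_Exercise_ST_Segment",
               "Num_Major_Vessels_Flouro", "Thalassemia",
               "Diagnosis_Heart_Disease")
```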
We assume this database has more columns that were not provided, for example:
ccf: social security number
relrest (1 = relieved after rest; 0 = otherwise)
pncaden (sum of 5, 6, and 7)
Among others. Recall, from the documentation above, the codings of thal and num:
thal: 3 = normal; 6 = fixed defect; 7 = reversible defect
num: Value 0: < 50% diameter narrowing; Value 1: > 50% diameter narrowing
(in any major vessel: attributes 59 through 68 are vessels)
Is it understood, then, that a value of 0 means no disease?
The attribute to predict is Diagnosis_Heart_Disease.
| Age | Sex | Chest Pain Type | Resting Blood Pressure | Serum Cholesterol | Fasting Blood Sugar | Resting ECG | Max Heart Rate Achieved | Exercise Induced Angina | ST Depression Exercise | Peak Exercise ST Segment | Num Major Vessels Flouro | Thal | Diagnosis Heart Disease |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 32 | 1 | 1 | 95 | 0 | ? | 0 | 127 | 0 | .7 | 1 | ? | ? | 1 |
| 34 | 1 | 4 | 115 | 0 | ? | ? | 154 | 0 | .2 | 1 | ? | ? | 1 |
| 35 | 1 | 4 | ? | 0 | ? | 0 | 130 | 1 | ? | ? | ? | 7 | 3 |
| 36 | 1 | 4 | 110 | 0 | ? | 0 | 125 | 1 | 1 | 2 | ? | 6 | 1 |
| 38 | 0 | 4 | 105 | 0 | ? | 0 | 166 | 0 | 2.8 | 1 | ? | ? | 2 |
| 38 | 0 | 4 | 110 | 0 | 0 | 0 | 156 | 0 | 0 | 2 | ? | 3 | 1 |
| 38 | 1 | 3 | 100 | 0 | ? | 0 | 179 | 0 | -1.1 | 1 | ? | ? | 0 |
| 38 | 1 | 3 | 115 | 0 | 0 | 0 | 128 | 1 | 0 | 2 | ? | 7 | 1 |
| 38 | 1 | 4 | 135 | 0 | ? | 0 | 150 | 0 | 0 | ? | ? | 3 | 2 |
| 38 | 1 | 4 | 150 | 0 | ? | 0 | 120 | 1 | ? | ? | ? | 3 | 1 |
To begin with, we have a lot of values marked with question marks. Let us see how many NA values we have with the function:
sum(is.na(df))
[1] 0
At least we have no NA values.
However, we do have a large number of "?" values.
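One way to see where those markers concentrate is to count the "?" entries per column; a sketch on a toy data frame (on the real data, the same `sapply` call applies to `df`, since glimpse showed the affected columns are stored as character strings):

```r
# Sketch: count "?" markers in each column, assuming they are stored
# as character strings (as the <chr> columns above suggest).
toy <- data.frame(V6  = c("?", "0", "?"),
                  V13 = c("?", "7", "3"),
                  stringsAsFactors = FALSE)
question_counts <- sapply(toy, function(x) sum(x == "?"))
question_counts  # number of "?" per column
```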
Therefore, even though we believe these columns are important, we will have to remove them, since they contribute no usable value, and imputing them could lead to erroneous conjectures.
We also want to know the number of observations in the target-variable column, to understand whether the dataset is relatively balanced.
| Diagnosis Heart Disease | n |
|---|---|
| 0 | 8 |
| 1 | 48 |
| 2 | 32 |
| 3 | 30 |
| 4 | 5 |
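A count of this kind can be reproduced with a simple `table()` call; a sketch, using a toy vector that mirrors the Switzerland distribution reported above (8, 48, 32, 30, 5):

```r
# Sketch: class distribution of the target variable.
target <- c(rep(0, 8), rep(1, 48), rep(2, 32), rep(3, 30), rep(4, 5))
table(target)
```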
The data are clearly imbalanced: there are many more sick patients.
According to the earlier analysis of the target column, we noted that:
Value 0: < 50% diameter narrowing
Value 1: > 50% diameter narrowing
(in any major vessel: attributes 59 through 68 are vessels)
Is it understood that a value of 0 means no disease?
We will keep this interpretation, so every value above 0 becomes 1; therefore:
Value 0: no disease
Value 1: disease present
In addition, we will convert the variables to factors, remove the columns that contribute nothing (5, 6, 12), and turn the observations containing ? into NA so they can be dropped afterwards.
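A sketch of that cleaning pipeline in base R, on a toy data frame (the column indices 5, 6, 12 and the variable names are the ones used above):

```r
# Sketch of the cleaning steps described above, on a toy data frame:
#   1. turn "?" markers into NA and drop incomplete rows,
#   2. binarize the target (anything above 0 becomes 1),
#   3. convert categorical variables to factors.
# On the real data, dropping columns 5, 6 and 12 would be df[, -c(5, 6, 12)].
df <- data.frame(Thal = c("7", "?", "3"),
                 Diagnosis_Heart_Disease = c(3, 1, 0),
                 stringsAsFactors = FALSE)
df[df == "?"] <- NA
df <- df[complete.cases(df), ]
df$Diagnosis_Heart_Disease <- factor(ifelse(df$Diagnosis_Heart_Disease > 0, 1, 0))
df$Thal <- factor(df$Thal)
```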
| Age | Sex | Chest Pain Type | Resting Blood Pressure | Resting ECG | Max Heart Rate Achieved | Exercise Induced Angina | ST Depression Exercise | Peak Exercise ST Segment | Thal | Diagnosis Heart Disease |
|---|---|---|---|---|---|---|---|---|---|---|
| 36 | 1 | 4 | 110 | 0 | 125 | 1 | 1.0 | 2 | 6 | 1 |
| 38 | 0 | 4 | 110 | 0 | 156 | 0 | 0.0 | 2 | 3 | 1 |
| 38 | 1 | 3 | 115 | 0 | 128 | 1 | 0.0 | 2 | 7 | 1 |
| 43 | 1 | 4 | 115 | 0 | 145 | 1 | 2.0 | 2 | 7 | 1 |
| 43 | 1 | 4 | 140 | 1 | 140 | 1 | 0.5 | 1 | 7 | 1 |
| 46 | 1 | 4 | 115 | 0 | 113 | 1 | 1.5 | 2 | 7 | 1 |
| 47 | 1 | 3 | 155 | 0 | 118 | 1 | 1.0 | 2 | 3 | 1 |
| 47 | 1 | 4 | 160 | 0 | 124 | 1 | 0.0 | 2 | 7 | 1 |
| 48 | 1 | 4 | 115 | 0 | 128 | 0 | 0.0 | 2 | 6 | 1 |
| 50 | 1 | 4 | 115 | 0 | 120 | 1 | 0.5 | 2 | 6 | 1 |
| 50 | 1 | 4 | 120 | 1 | 156 | 1 | 0.0 | 1 | 6 | 1 |
| 51 | 1 | 4 | 120 | 0 | 104 | 0 | 0.0 | 2 | 3 | 1 |
| 51 | 1 | 4 | 140 | 0 | 60 | 0 | 0.0 | 2 | 3 | 1 |
| 52 | 1 | 4 | 130 | 0 | 120 | 0 | 0.0 | 2 | 7 | 1 |
| 52 | 1 | 4 | 135 | 0 | 128 | 1 | 2.0 | 2 | 7 | 1 |
| 52 | 1 | 4 | 165 | 0 | 122 | 1 | 1.0 | 1 | 7 | 1 |
| 53 | 1 | 2 | 120 | 0 | 95 | 0 | 0.0 | 2 | 3 | 1 |
| 53 | 1 | 3 | 105 | 0 | 115 | 0 | 0.0 | 2 | 7 | 1 |
| 53 | 1 | 4 | 120 | 0 | 120 | 0 | 0.0 | 2 | 7 | 1 |
| 53 | 1 | 4 | 130 | 2 | 135 | 1 | 1.0 | 2 | 7 | 1 |
| 54 | 1 | 4 | 120 | 0 | 155 | 0 | 0.0 | 2 | 7 | 1 |
| 54 | 1 | 4 | 130 | 0 | 110 | 1 | 3.0 | 2 | 7 | 1 |
| 54 | 1 | 4 | 180 | 0 | 150 | 0 | 1.5 | 2 | 7 | 1 |
| 55 | 1 | 4 | 120 | 1 | 92 | 0 | 0.3 | 1 | 7 | 1 |
| 55 | 1 | 4 | 140 | 0 | 83 | 0 | 0.0 | 2 | 7 | 1 |
| 56 | 1 | 3 | 120 | 0 | 97 | 0 | 0.0 | 2 | 7 | 0 |
| 56 | 1 | 3 | 125 | 0 | 98 | 0 | -2.0 | 2 | 7 | 1 |
| 56 | 1 | 3 | 155 | 1 | 99 | 0 | 0.0 | 2 | 3 | 1 |
| 56 | 1 | 4 | 120 | 1 | 100 | 1 | -1.0 | 3 | 7 | 1 |
| 56 | 1 | 4 | 125 | 0 | 103 | 1 | 1.0 | 2 | 7 | 1 |
| 57 | 1 | 4 | 140 | 0 | 120 | 1 | 2.0 | 2 | 6 | 1 |
| 57 | 1 | 4 | 160 | 0 | 98 | 1 | 2.0 | 2 | 7 | 1 |
| 58 | 1 | 4 | 130 | 1 | 100 | 1 | 1.0 | 2 | 6 | 1 |
| 59 | 1 | 4 | 120 | 0 | 115 | 0 | 0.0 | 2 | 3 | 1 |
| 59 | 1 | 4 | 135 | 0 | 115 | 1 | 1.0 | 2 | 7 | 1 |
| 60 | 1 | 4 | 135 | 0 | 63 | 1 | 0.5 | 1 | 7 | 1 |
| 60 | 1 | 4 | 160 | 1 | 99 | 1 | 0.5 | 2 | 7 | 1 |
| 61 | 1 | 4 | 125 | 0 | 105 | 1 | 0.0 | 3 | 7 | 1 |
| 61 | 1 | 4 | 130 | 2 | 115 | 0 | 0.0 | 2 | 7 | 1 |
| 61 | 1 | 4 | 150 | 0 | 105 | 1 | 0.0 | 2 | 7 | 1 |
| 61 | 1 | 4 | 150 | 0 | 117 | 1 | 2.0 | 2 | 7 | 1 |
| 61 | 1 | 4 | 160 | 1 | 145 | 0 | 1.0 | 2 | 7 | 1 |
| 62 | 1 | 3 | 160 | 0 | 72 | 1 | 0.0 | 2 | 3 | 1 |
| 62 | 1 | 4 | 115 | 0 | 72 | 1 | -0.5 | 2 | 3 | 1 |
| 62 | 1 | 4 | 150 | 1 | 78 | 0 | 2.0 | 2 | 7 | 1 |
| 63 | 1 | 4 | 185 | 0 | 98 | 1 | 0.0 | 1 | 7 | 1 |
| 64 | 0 | 4 | 200 | 0 | 140 | 1 | 1.0 | 2 | 3 | 1 |
| 65 | 1 | 4 | 115 | 0 | 93 | 1 | 0.0 | 2 | 7 | 1 |
| 66 | 1 | 4 | 150 | 0 | 108 | 1 | 2.0 | 2 | 7 | 1 |
| 67 | 1 | 1 | 145 | 2 | 125 | 0 | 0.0 | 2 | 3 | 1 |
| 68 | 1 | 4 | 135 | 1 | 120 | 1 | 0.0 | 1 | 7 | 1 |
| 69 | 1 | 4 | 135 | 0 | 130 | 0 | 0.0 | 2 | 6 | 1 |
| 70 | 1 | 4 | 115 | 1 | 92 | 1 | 0.0 | 2 | 7 | 1 |
| 70 | 1 | 4 | 140 | 0 | 157 | 1 | 2.0 | 2 | 7 | 1 |
| 73 | 0 | 3 | 160 | 1 | 121 | 0 | 0.0 | 1 | 3 | 1 |
As can be seen, only 55 observations remain, and we removed columns that we believe might have been interesting to analyze; however, as agreed in consultation with Dr. Diógenes Alexander Garrido, these attributes will not be considered.
Recalling a bit of statistics: by the central limit theorem we have enough data to approximate the population, although for an analysis closer to big data or data mining it falls short.
This part was somewhat difficult to carry out, and we feel we are really losing a lot of information.
We will now evaluate some relationships with the target variable. Each plot is described beneath it.
As can be seen, because all observations containing '?' were dropped, our data shrank considerably, and the healthy population (0) was reduced to the point that we are left with a single observation. :'(
We had planned to do further analysis with this "clean" data; instead, we will take the risk of using the data with nulls, dfh.
It is no surprise that our data remain imbalanced; from the first value count we identified that the observations with 0 were a minority (only 10). We continue with the description.
As we can observe, there are no healthy women; on the other hand, the women who do fall ill tend to be older than the men, and there also appears to be a skew, since the median lies closer to the third quartile.
We can see that those who arrived at the hospital experienced pains such as "non-anginal pain" and "asymptomatic angina", with very few cases of "atypical angina".
On the other hand, those who did have heart disease experienced all 4 pain types, though mostly type number 2, that is, "atypical angina".
We could not interpret this plot, but a cardiologist might extract useful information from it.
We will build a correlation matrix using the Pearson method; however, this database has an excess of skewed values, so we will also use the Kendall method.
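The two matrices below come from calls of this shape (a sketch on toy numeric columns; on the real data, `cor` is applied to the numeric subset of the cleaned data frame):

```r
# Sketch: correlation matrices with both methods, on toy numeric columns
# named after two of the real variables.
num <- data.frame(Age = c(36, 50, 61, 45, 58),
                  Max_Heart_Rate_Achieved = c(125, 120, 105, 140, 98))
cor_pearson <- cor(num, method = "pearson")   # linear correlation
cor_kendall <- cor(num, method = "kendall")   # rank-based, robust to skew
```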
Age Resting_Blood_Pressure
Age 1.00000000 0.36394744
Resting_Blood_Pressure 0.36394744 1.00000000
Max_Heart_Rate_Achieved -0.25988609 0.02021897
ST_Depression_Exercise -0.03219504 0.25458948
Thal -0.02213109 -0.10694451
Max_Heart_Rate_Achieved ST_Depression_Exercise
Age -0.25988609 -0.03219504
Resting_Blood_Pressure 0.02021897 0.25458948
Max_Heart_Rate_Achieved 1.00000000 0.22328465
ST_Depression_Exercise 0.22328465 1.00000000
Thal 0.12484944 0.24057602
Thal
Age -0.02213109
Resting_Blood_Pressure -0.10694451
Max_Heart_Rate_Achieved 0.12484944
ST_Depression_Exercise 0.24057602
Thal 1.00000000
Age Resting_Blood_Pressure
Age 1.00000000 0.30385195
Resting_Blood_Pressure 0.30385195 1.00000000
Max_Heart_Rate_Achieved -0.20041700 0.01206703
ST_Depression_Exercise -0.05253703 0.22663641
Thal 0.02947812 0.02344036
Max_Heart_Rate_Achieved ST_Depression_Exercise
Age -0.20041700 -0.05253703
Resting_Blood_Pressure 0.01206703 0.22663641
Max_Heart_Rate_Achieved 1.00000000 0.17937257
ST_Depression_Exercise 0.17937257 1.00000000
Thal -0.01135582 0.22024489
Thal
Age 0.02947812
Resting_Blood_Pressure 0.02344036
Max_Heart_Rate_Achieved -0.01135582
ST_Depression_Exercise 0.22024489
Thal 1.00000000
There are some insignificant differences; the important takeaway from this analysis is that none of the variables appears to be significantly correlated with another, so we can rest easy and avoid removing more columns.
We already know, by eyeball, that our data are not balanced, but we will still analyze each variable.
Sex
Diagnosis_Heart_Disease 0 1
0 0 1
1 3 51
Chest_Pain_Type
Diagnosis_Heart_Disease 1 2 3 4
0 0 0 1 0
1 1 1 7 45
Resting_ECG
Diagnosis_Heart_Disease ? 0 1 2
0 0 1 0 0
1 0 39 12 3
Exercise_Induced_Angina
Diagnosis_Heart_Disease ? 0 1
0 0 1 0
1 0 21 33
Peak_Exercise_ST_Segment
Diagnosis_Heart_Disease ? 1 2 3
0 0 0 1 0
1 0 8 44 2
Most of our columns are disproportionate. T_T
We considered splitting the data into training and test sets, but with 55 observations it is not worth it, so we will fit the model on all the data and analyze it.
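The fit reported below has this shape (a sketch on toy data with a single predictor; the real call regresses the target on all predictors, `Diagnosis_Heart_Disease ~ .`, over the cleaned data frame `dfc`):

```r
# Sketch: logistic regression with a quasibinomial family, as in the
# summary below, but on a toy data frame with one predictor.
toy <- data.frame(Age = c(36, 38, 43, 50, 56, 61, 70, 73),
                  Diagnosis_Heart_Disease = c(0, 1, 1, 0, 1, 1, 1, 0))
model <- glm(Diagnosis_Heart_Disease ~ Age, family = quasibinomial, data = toy)
summary(model)
```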
Call:
glm(formula = Diagnosis_Heart_Disease ~ ., family = quasibinomial,
data = dfc)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.386e-05 2.100e-08 2.100e-08 6.090e-07 3.327e-05
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 169.56392 42.74201 3.967 0.000294 ***
Age -1.86239 0.69753 -2.670 0.010911 *
Sex1 53.04647 5.78360 9.172 2.20e-11 ***
Chest_Pain_Type2 -49.80215 6.58024 -7.568 3.07e-09 ***
Chest_Pain_Type3 -20.71370 4.98303 -4.157 0.000165 ***
Chest_Pain_Type4 59.15288 6.32286 9.355 1.27e-11 ***
Resting_Blood_Pressure -0.99153 0.08435 -11.755 1.50e-14 ***
Resting_ECG1 77.33242 3.79159 20.396 < 2e-16 ***
Resting_ECG2 -35.83130 7.07914 -5.062 9.73e-06 ***
Max_Heart_Rate_Achieved 1.22969 0.05736 21.437 < 2e-16 ***
Exercise_Induced_Angina1 52.48834 1.71440 30.616 < 2e-16 ***
ST_Depression_Exercise -24.13400 0.43887 -54.991 < 2e-16 ***
Peak_Exercise_ST_Segment2 9.98196 2.94629 3.388 0.001592 **
Peak_Exercise_ST_Segment3 -71.11725 4.53786 -15.672 < 2e-16 ***
Thal -18.45166 0.43710 -42.213 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for quasibinomial family taken to be 1.118424e-10)
Null deviance: 9.9964e+00 on 54 degrees of freedom
Residual deviance: 3.2918e-09 on 40 degrees of freedom
AIC: NA
Number of Fisher Scoring iterations: 25
As we can see, the model is telling us that all the variables are explanatory, with very good \(p\)-values. We believe this is due to all the values we removed, such as the nulls and the ?s.
The residuals are nothing more than the distances between the observed values and the fitted model; these values are also known as "errors".
The residual deviance tells us how well the response variable can be predicted by a model with \(p\) predictor variables. The lower the value, the better the model can predict the response variable.
To determine whether a model is "useful", we can compute the Chi-Square statistic as:
\[\chi^2 = \text{Null deviance} - \text{Residual deviance}\]
We will not carry out the computation, since we believe that, with mostly cardiac cases, our data are not well balanced.
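For reference, had we computed it, the statistic would come straight from the deviances reported in the summary above (a sketch; the degrees of freedom are the difference 54 - 40 = 14):

```r
# Sketch: chi-square statistic from the null and residual deviances
# reported in the model summary, with its p-value.
null_dev  <- 9.9964e+00   # Null deviance (54 df)
resid_dev <- 3.2918e-09   # Residual deviance (40 df)
chi_sq  <- null_dev - resid_dev
p_value <- pchisq(chi_sq, df = 54 - 40, lower.tail = FALSE)
```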
We plot the fitted values on the \(x\)-axis and the residuals \(y-\hat{y}\) on the \(y\)-axis. As can be seen, the residuals concentrate mostly around the 0 line, which suggests that the assumption of a linear relationship is reasonable.
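The plot just described can be sketched as follows (on a toy fit, since the real model object is not reproduced here):

```r
# Sketch: residuals-versus-fitted plot, as described above, on a toy fit.
toy <- data.frame(x = 1:10, y = c(0, 0, 0, 1, 0, 1, 1, 1, 1, 1))
fit <- glm(y ~ x, family = binomial, data = toy)
plot(fitted(fit), residuals(fit),
     xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)  # reference line at zero
```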